3 research outputs found

    Privacy-Preserving Classification on Deep Neural Network

    Neural Networks (NN) are today increasingly used in Machine Learning, where they have become deeper and deeper to accurately model or classify high-level abstractions of data. Their development, however, also gives rise to important data privacy risks. This observation motivated Microsoft researchers to propose a framework called Cryptonets. The core idea is to combine simplifications of the NN with Fully Homomorphic Encryption (FHE) techniques to obtain both confidentiality of the manipulated data and efficiency of the processing. While efficiency and accuracy are demonstrated when the number of non-linear layers is small (e.g. 2), Cryptonets unfortunately becomes ineffective for deeper NNs, which leaves the problem of privacy-preserving matching open in these contexts. This work successfully addresses this problem by combining the original ideas of Cryptonets' solution with the batch normalization principle introduced at ICML 2015 by Ioffe and Szegedy. We experimentally validate the soundness of our approach with a neural network with 6 non-linear layers. When applied to the MNIST database, it competes with the accuracy of the best non-secure versions, thus significantly improving on Cryptonets.
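
    The core idea lends itself to a short illustration: batch normalization keeps each layer's pre-activations in a narrow range, where a low-degree polynomial such as the squaring function used by Cryptonets can stand in for the usual non-linearity and still be evaluated under FHE. The NumPy sketch below is only an illustration of that combination under simplified assumptions, not the paper's actual network; the layer sizes, function names, and random data are made up, and the homomorphic encryption itself is omitted.

```python
import numpy as np

# Minimal sketch (not the paper's architecture): CryptoNets-style networks
# replace non-linear activations with low-degree polynomials so they can be
# evaluated under homomorphic encryption.  Batch normalization keeps
# pre-activations in a narrow range where such a polynomial behaves well.

def batch_norm(x, eps=1e-5):
    """Normalize each feature to zero mean / unit variance over the batch."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mean) / np.sqrt(var + eps)

def square_activation(x):
    """x^2 is FHE-friendly: it needs a single ciphertext multiplication."""
    return x * x

def forward(x, weights):
    """Toy forward pass: linear layer -> batch norm -> polynomial activation."""
    for w in weights:
        x = x @ w
        x = batch_norm(x)          # keeps values small before squaring
        x = square_activation(x)   # stands in for ReLU/sigmoid under FHE
    return x

# Example: random data standing in for MNIST-like inputs.
rng = np.random.default_rng(0)
x = rng.normal(size=(32, 784))
weights = [rng.normal(scale=0.05, size=(784, 128)),
           rng.normal(scale=0.05, size=(128, 10))]
print(forward(x, weights).shape)   # (32, 10)
```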

    Regulating the Pace of von Neumann Correctors

    In a celebrated paper published in 1951, von Neumann presented a simple procedure for correcting the bias of random sources. This device outputs bits at irregular intervals. However, cryptographic hardware is usually synchronous. This paper proposes a new building block, called a Pace Regulator, inserted between the randomness consumer and the von Neumann corrector to streamline the pace of the random bits.
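
    For context, von Neumann's procedure itself is tiny: the source is read in pairs of bits, the pairs 01 and 10 are mapped to the output bits 0 and 1, and the pairs 00 and 11 are discarded. The bias disappears, but so does any regularity in the output pace. A minimal, purely illustrative Python sketch:

```python
import random

def von_neumann_corrector(biased_bits):
    """Von Neumann's 1951 de-biasing procedure: read bits in pairs, emit 0
    for the pair (0, 1), 1 for (1, 0), and discard (0, 0) and (1, 1).
    The output is unbiased, but it arrives at irregular intervals."""
    it = iter(biased_bits)
    for a, b in zip(it, it):
        if a != b:
            yield a

# Example with a heavily biased source (P[bit = 1] = 0.8).
biased = (1 if random.random() < 0.8 else 0 for _ in range(10_000))
out = list(von_neumann_corrector(biased))
print(len(out), sum(out) / len(out))   # far fewer bits out, mean close to 0.5
```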

    Regulating the pace of von Neumann correctors

    In a famous paper published in 1951 (Natl Bur Stand Appl Math Ser 12:36–38, 1951), von Neumann presented a simple procedure for correcting the bias of random sources. This procedure introduces latencies between the random outputs. On the other hand, algorithms such as stream ciphers, block ciphers, or even modular multipliers usually run in a number of clock cycles that is independent of the operands’ values: feeding such hardware blocks with the inherently irregular output of such de-biased sources frequently proves tricky and is challenging to model at the HDL level. We propose an algorithm to compensate for these irregularities by storing or releasing numbers at given intervals of time. This algorithm is modeled as a special queue that achieves zero blocking probability and a near-deterministic service distribution (i.e., of minimal variance). While particularly suited to cryptographic applications, for which it was designed, this algorithm also applies to a variety of contexts and constitutes an example of a queue for which the buffer allocation problem can be solved.
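
    A toy simulation makes the queueing picture concrete. In the sketch below (an illustration under simplified assumptions, not the algorithm from the paper), de-biased bits arrive irregularly from a biased source, a FIFO buffer absorbs them, and a synchronous consumer drains one bit on a fixed clock; the stall counter records exactly the event that a properly dimensioned Pace Regulator is meant to rule out. All parameter names and rates are invented for the demonstration.

```python
from collections import deque
import random

# Illustrative simulation only: a FIFO buffer sits between the irregular
# output of a von Neumann corrector and a synchronous consumer that expects
# one bit every `consumer_period` ticks.  Choosing the buffer size and the
# rates so the queue never empties (zero blocking probability, minimal
# service variance) is the paper's contribution and is not reproduced here.

def simulate(n_ticks, bias=0.8, consumer_period=8, seed=0):
    rng = random.Random(seed)
    buffer = deque()
    pair = []
    delivered, stalls = [], 0
    for tick in range(n_ticks):
        # Source side: one biased raw bit per tick, de-biased in pairs.
        pair.append(1 if rng.random() < bias else 0)
        if len(pair) == 2:
            a, b = pair
            pair.clear()
            if a != b:
                buffer.append(a)          # unbiased bit, irregular arrival
        # Consumer side: one bit requested every consumer_period ticks.
        if tick % consumer_period == 0:
            if buffer:
                delivered.append(buffer.popleft())
            else:
                stalls += 1               # the event a Pace Regulator must avoid
    return delivered, stalls

bits, stalls = simulate(100_000)
print(len(bits), stalls, sum(bits) / len(bits))   # ~unbiased output, few stalls
```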